Hebbian Plasticity Realigns Grid Cell Activity with External Sensory Cues in Continuous Attractor Models

Authors

  • Marcello Mulas
  • Nicolai Waniek
  • Jörg Conradt
Abstract

After the discovery of grid cells, an essential component for understanding how the mammalian brain encodes spatial information, three main classes of computational models were proposed to explain their working principles. Among them, the class based on continuous attractor networks (CAN) is promising in terms of biological plausibility and is suitable for robotic applications. However, in its current formulation it cannot reproduce important electrophysiological findings and cannot perform path integration over long periods of time. In the absence of an appropriate resetting mechanism, errors that accumulate over time, due to the noise intrinsic in velocity estimation and neural computation, prevent CAN models from reproducing stable spatial grid patterns. In this paper, we propose an extension of the CAN model that uses Hebbian plasticity to anchor grid cell activity to environmental landmarks. To validate our approach, we drove the neural simulations with both artificial data and real data recorded from a robotic setup. The additional neural mechanism can not only anchor grid patterns to external sensory cues but also recall grid patterns generated in previously explored environments. These results may prove instrumental for next-generation bio-inspired robotic navigation algorithms that exploit neural computation to cope with complex and dynamic environments.
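The anchoring idea in the abstract can be sketched in a few lines. The following Python fragment is an illustrative toy, not the paper's implementation: the network size, learning rate, bump width, and correction gain are all arbitrary assumptions. It forms a Hebbian association from a single landmark cell onto a 1-D ring of grid cells, then uses that association to pull a drifted activity bump back to its anchored phase:

```python
import numpy as np

N = 64        # grid cells on a 1-D ring (hypothetical size)
eta = 0.1     # Hebbian learning rate (hypothetical)

def bump(center, width=3.0):
    """Gaussian activity bump on the ring, centred at cell index `center`."""
    idx = np.arange(N)
    d = np.minimum(np.abs(idx - center), N - np.abs(idx - center))  # ring distance
    a = np.exp(-d**2 / (2 * width**2))
    return a / a.max()

# Attractor phase at the moment the landmark is first observed.
anchored = bump(center=20)

# Hebbian association from a landmark cell (rate 1 when the landmark
# is visible) onto the grid cells:  dw = eta * pre * post.
w = eta * 1.0 * anchored

# Later, noisy path integration has drifted the bump to the wrong phase.
drifted = bump(center=30)

# Re-observing the landmark injects the learned pattern and pulls the
# activity peak back to the anchored phase (simple additive correction).
corrected = drifted + 15.0 * w

print("drifted peak:  ", np.argmax(drifted))    # 30
print("corrected peak:", np.argmax(corrected))  # 20
```

In a full CAN model the correction would act through the network's recurrent dynamics rather than a one-shot additive term, but the Hebbian landmark-to-grid association plays the same role.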


Similar articles

Accurate Path Integration in Continuous Attractor Network Models of Grid Cells

Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets trigge...
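The rapid error accumulation mentioned above can be illustrated with a toy simulation (an assumption-laden sketch, not the cited model; the step count, velocity, and noise level are arbitrary). Integrating noisy velocity estimates makes the position error behave like a random walk, growing roughly as sigma·sqrt(t):

```python
import numpy as np

rng = np.random.default_rng(0)

T = 10_000      # integration steps
trials = 200    # independent runs, to average out single-run luck
true_v = 0.01   # constant true velocity (arbitrary units/step)
sigma = 0.005   # std of the noise on each velocity estimate

true_pos = true_v * np.arange(1, T + 1)

# Path integration: the position estimate is the running sum of noisy velocities.
noisy_v = true_v + sigma * rng.standard_normal((trials, T))
est_pos = np.cumsum(noisy_v, axis=1)

# The mean absolute error grows like sigma * sqrt(t), a random walk; without
# an external resetting cue the encoded position eventually becomes useless.
mean_err = np.abs(est_pos - true_pos).mean(axis=0)
print(f"mean error after   100 steps: {mean_err[99]:.4f}")
print(f"mean error after 10000 steps: {mean_err[-1]:.4f}")
```

The 100x longer run ends with roughly 10x the error (sqrt(10000/100) = 10), which is why CAN models need periodic resets from external cues.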


Continuous or discrete attractors in neural circuits? A self-organized switch at maximal entropy

A recent experiment suggests that neural circuits may alternatively implement continuous or discrete attractors, depending on the training setup. In recurrent neural network models, continuous and discrete attractors are separately modeled by distinct forms of synaptic prescriptions (learning rules). Here, we report a solvable network model, endowed with Hebbian synaptic plasticity, which is a...


Place Cells, Grid Cells, Attractors, and Remapping

Place and grid cells are thought to use a mixture of external sensory information and internal attractor dynamics to organize their activity. Attractor dynamics may explain both why neurons react coherently following sufficiently large changes to the environment (discrete attractors) and how firing patterns move smoothly from one representation to the next as an animal moves through space (cont...


III-14. Time-warped PCA: simultaneous alignment and dimensionality reduction of neural data

firing patterns, e.g. by producing stochastic drift in grid attractor networks. Recently, Hardcastle et al. (Neuron, 2015) proposed that border cells may provide one mechanism for correcting such drift. We construct a model in which experience-dependent Hebbian plasticity during exploration allows border cells to self-organize their responses, while also learning connectivity to grid cells whic...


Learning in sparse attractor networks with inhibition

Attractor networks are important models for brain functions on a behavioral and physiological level, but learning on sparse patterns has not been fully explained. Here we show that including the activity-dependent effect of an inhibitory pool in Hebbian learning can accomplish learning of stable sparse attractors in both continuous attractor and point attractor neural networks.



Journal:
  • Frontiers in computational neuroscience

Volume 10, Issue: -

Pages: -

Publication date: 2016